Assertion Roulette
It is hard to tell which of several assertions within the same test method caused a test failure.
Symptoms
A test fails. Upon examining the output of the Test Runner (page X), we cannot determine exactly which assertion failed.
Impact
When a test fails during an automated Integration Build [SCM], it may be hard to tell exactly which assertion failed. If the problem cannot be reproduced on a developer's machine (as may be the case if it is caused by environmental issues or Resource Optimism (see Erratic Test on page X)), fixing the problem may be difficult and time-consuming.
Causes
Cause: Eager Test
A single test verifies too much functionality.
Symptoms
A test exercises several methods of the system under test (SUT), or calls the same method several times, interspersed with fixture setup logic and assertions.
public void testFlightMileage_asKm2() throws Exception {
   // setup fixture
   // exercise constructor
   Flight newFlight = new Flight(validFlightNumber);
   // verify constructed object
   assertEquals(validFlightNumber, newFlight.number);
   assertEquals("", newFlight.airlineCode);
   assertNull(newFlight.airline);
   // setup mileage
   newFlight.setMileage(1122);
   // exercise mileage translator
   int actualKilometres = newFlight.getMileageAsKm();
   // verify results
   int expectedKilometres = 1810;
   assertEquals(expectedKilometres, actualKilometres);
   // now try it with a canceled flight:
   newFlight.cancel();
   try {
      newFlight.getMileageAsKm();
      fail("Expected exception");
   } catch (InvalidRequestException e) {
      assertEquals("Cannot get cancelled flight mileage",
                   e.getMessage());
   }
}
Example EagerTest embedded from java/com/xunitpatterns/testtemplates/BadExamples.java
Another possible symptom is that the test automater(s) want to modify the Test Automation Framework (page X) to keep going after an assertion has failed so that the remaining assertions can still be executed.
Root Cause
Eager Test is often caused by trying to minimize the number of unit tests (whether consciously or unconsciously) by verifying many test conditions in a single Test Method (page X). While this is a good practice for manually executed tests, which have "liveware" interpreting the results and adjusting the tests in real time, it does not work very well for Fully Automated Tests (see Goals of Test Automation on page X).
Another common cause is using xUnit to automate customer tests that require many steps, thereby verifying many aspects of the SUT in each test. These tests are necessarily longer than unit tests, but care should be taken to keep them as short as possible (but no shorter!).
Possible Solution
For unit tests, we break up the test into a suite of Single Condition Tests (see Principles of Test Automation on page X) by teasing apart the Eager Test. It may be possible to do this by using an Extract Method [Fowler] refactoring to pull out independent pieces into their own Test Methods. Sometimes it is easier to clone the test once for each test condition and then clean up each Test Method by removing any code that is not required for that particular test condition. Any code required to set up the fixture or to put the SUT into the correct starting state can be extracted into a Creation Method (page X). A good IDE or compiler will then help us determine which variables are no longer being used.
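As a sketch only, the Eager Test shown above might be teased apart along the following lines. The Creation Method createAnonymousFlight and the test names are hypothetical, introduced here purely for illustration.
public void testFlightConstructor() throws Exception {
   // exercise constructor
   Flight newFlight = new Flight(validFlightNumber);
   // verify constructed object
   assertEquals(validFlightNumber, newFlight.number);
   assertEquals("", newFlight.airlineCode);
   assertNull(newFlight.airline);
}

public void testGetMileageAsKm() throws Exception {
   // setup fixture via a Creation Method (hypothetical helper)
   Flight newFlight = createAnonymousFlight();
   newFlight.setMileage(1122);
   // exercise mileage translator
   int actualKilometres = newFlight.getMileageAsKm();
   // verify results
   assertEquals(1810, actualKilometres);
}

public void testGetMileageAsKm_cancelledFlight() throws Exception {
   // setup fixture: a cancelled flight
   Flight newFlight = createAnonymousFlight();
   newFlight.setMileage(1122);
   newFlight.cancel();
   // exercise and verify the expected exception
   try {
      newFlight.getMileageAsKm();
      fail("Expected exception");
   } catch (InvalidRequestException e) {
      assertEquals("Cannot get cancelled flight mileage", e.getMessage());
   }
}

private Flight createAnonymousFlight() {
   return new Flight(validFlightNumber);
}
Each resulting Test Method verifies a single condition, so a failure in the Test Runner points directly at the behavior that broke.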
If we are automating customer tests using xUnit and this has resulted in many steps in each test because the work-flows require complex fixture setup, we could consider using some other way to set up the fixture for the later parts of the test. If we can use Back Door Setup (see Back Door Manipulation on page X) to create the fixture for the last part of the test independently of the first part, we have just succeeded in turning one test into two and have improved our Defect Localization (see Goals of Test Automation). We should repeat the process as many times as it takes to make the tests short enough to be read at a single glance and to Communicate Intent (see Principles of Test Automation) clearly.
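To sketch the idea, suppose a customer test first places an order and then ships it. Splitting it in two, the second test can use Back Door Setup to create the pending order directly rather than replaying the place-order work-flow. All names below (orderFacade, testDatabase.insertPendingOrder, and so on) are hypothetical stand-ins for whatever the SUT and its database actually provide.
public void testPlaceOrder() {
   // exercise the SUT through its normal interface
   String orderNumber = orderFacade.placeOrder(customerId, productId, 5);
   // verify
   assertEquals("order status", "PENDING", orderFacade.getStatus(orderNumber));
}

public void testShipOrder() {
   // Back Door Setup: insert a pending order directly into the database
   // instead of re-running the whole placeOrder work-flow
   String orderNumber = testDatabase.insertPendingOrder(customerId, productId, 5);
   // exercise
   orderFacade.shipOrder(orderNumber);
   // verify
   assertEquals("order status", "SHIPPED", orderFacade.getStatus(orderNumber));
}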
Cause: Missing Assertion Message
Symptoms
A test fails. Upon examining the output of the Test Runner, we cannot determine exactly which assertion failed.
Root Cause
This is caused by the use of Assertion Method (page X) calls with identical or missing Assertion Messages (page X). It is most commonly a problem when running tests using a Command-Line Test Runner (see Test Runner) or a Test Runner that is not integrated with the program text editor or development environment. In the following test, we have a number of Equality Assertions (see Assertion Method):
public void testInvoice_addLineItem7() {
   LineItem expItem = new LineItem(inv, product, QUANTITY);
   // Exercise
   inv.addItemQuantity(product, QUANTITY);
   // Verify
   List lineItems = inv.getLineItems();
   LineItem actual = (LineItem)lineItems.get(0);
   assertEquals(expItem.getInv(), actual.getInv());
   assertEquals(expItem.getProd(), actual.getProd());
   assertEquals(expItem.getQuantity(), actual.getQuantity());
}
Example NaiveInlineAssertions embedded from java/com/clrstream/camug/example/test/InvoiceTest.java
When an assertion fails, will we know which one it was? An Equality Assertion typically prints out both the expected and actual values, but it may be hard to tell which assertion failed if the expected values are similar or print out cryptically. A good rule of thumb is to include at least a minimal Assertion Message whenever we have more than one call to the same kind of Assertion Method.
Possible Solution
If the problem occurred while running a test using a Graphical Test Runner (see Test Runner) with IDE integration, we should be able to click on the appropriate line in the stack trace to have the IDE highlight the failed assertion. Failing this, we can run the test under the debugger and single-step through it to see which assertion statement fails.
If the problem occurred while running a test using a Command-Line Test Runner, we can try running the test from a Graphical Test Runner with IDE integration to determine the offending assertion. If that doesn't work, we may have to resort to line numbers (if available) or to a process of elimination, ruling out the assertions that could not have produced the failure output, to narrow down the choice. Of course, we could just bite the bullet and add a unique Assertion Message (even just a number!) to each call to an Assertion Method.
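For example, the Equality Assertions in testInvoice_addLineItem7 could each be given a minimal Assertion Message; the message text below is only illustrative.
   // Verify, with a unique message on each assertion
   assertEquals("inv", expItem.getInv(), actual.getInv());
   assertEquals("prod", expItem.getProd(), actual.getProd());
   assertEquals("quantity", expItem.getQuantity(), actual.getQuantity());
With even these terse messages, the Test Runner output identifies exactly which assertion failed.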
Further Reading
Assertion Roulette and Eager Test were first described in a paper at XP2001 called "Refactoring Test Code" [RTC].